fix: reduce memory overflow from checkpoint reads and writes (#344) #571
Conversation
- Make append_checkpoint truly append-only (O(1) instead of O(N) read-write-all)
- Use BufReader for streaming JSONL reads instead of read_to_string
- Eliminate 3 redundant read_all_checkpoints() calls in checkpoint::run()
- Pass pre-loaded checkpoints to get_all_tracked_files
- Defer char-level attribution pruning to write_all_checkpoints
- Use BufWriter for efficient checkpoint serialization

Addresses #344

Co-Authored-By: Sasha Varlamov <sasha@sashavarlamov.com>
No AI authorship found for these commits. Please install git-ai to start tracking AI generated code in your commits.
Co-Authored-By: Sasha Varlamov <sasha@sashavarlamov.com>
devin review's feedback
…ent data loss Co-Authored-By: Sasha Varlamov <sasha@sashavarlamov.com>
…during agent loops Co-Authored-By: Sasha Varlamov <sasha@sashavarlamov.com>
…_checkpoint) Co-Authored-By: Sasha Varlamov <sasha@sashavarlamov.com>
```rust
let mut checkpoints = if reset {
    working_log.reset_working_log()?;
    Vec::new()
} else {
    working_log.read_all_checkpoints()?
};
```
🔴 Reordering reset before early-exit causes data loss when reset=true and is_pre_commit=true
When reset=true and is_pre_commit=true, the new code destroys the working log (checkpoints, blobs, INITIAL file) at line 165 before the early-exit check at lines 177-195. After the reset, checkpoints is empty, so has_no_ai_edits is always true (.all() on empty iterator returns true). Additionally, reset_working_log() deletes the INITIAL file (src/git/repo_storage.rs:213-215), so has_initial_attributions is false. Unless inter_commit_move is enabled, the early exit fires and the function returns Ok((0, 0, 0)) — the working log is destroyed but no fresh checkpoint is created.
Root cause: operation reordering
In the old code, the early-exit check at lines 161-178 (LEFT) ran BEFORE the reset at lines 281-287 (LEFT). If the old data had AI checkpoints, all_ai_touched_files() would return non-empty, has_no_ai_edits would be false, and the early exit would NOT fire. The reset would then happen, and the function would proceed to create a fresh checkpoint.
In the new code, the reset happens first at line 164-166 (RIGHT), clearing all checkpoints, blobs, and the INITIAL file. Then the early-exit check at lines 177-195 sees empty checkpoints and an empty INITIAL file, so it almost always fires.
Old behavior with reset=true, is_pre_commit=true, old data had AI edits:
- Early exit does NOT fire (old data has AI files)
- Reset happens
- Fresh checkpoint is created ✓
New behavior:
- Reset happens (old data destroyed)
- Early exit fires (empty checkpoints → has_no_ai_edits=true)
- Returns (0, 0, 0): no new checkpoint is created, data lost ✗
Impact: If reset=true is ever combined with is_pre_commit=true, AI attribution data is permanently destroyed without replacement.
Prompt for agents
In src/commands/checkpoint.rs, the block that reads/resets checkpoints (lines 164-174) was moved before the is_pre_commit early-exit check (lines 176-196). This changes behavior when reset=true and is_pre_commit=true: the working log is destroyed before the early-exit fires, causing data loss. To fix this, move the reset/read block back after the early-exit check, or restructure so that when reset=true the early-exit is skipped. The simplest fix: move lines 164-174 (the read_checkpoints_start block including the reset) to after the is_pre_commit early exit block (after line 196), and for the early-exit check use a separate lightweight read (e.g. working_log.all_ai_touched_files() as before, or read checkpoints once for the early-exit check and then conditionally reset afterward).
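A minimal sketch of the reordering described above, reading the checkpoints once for the early-exit check and only resetting afterwards. The `Checkpoint`/`WorkingLog` types, the `is_ai` field, and the `has_initial_attributions` helper are simplified stand-ins for the real ones in this repository, not the actual API:

```rust
// Simplified stand-ins; the real types live in src/git/repo_storage.rs.
struct Checkpoint { entries: Vec<String>, is_ai: bool }
struct WorkingLog;

impl WorkingLog {
    fn read_all_checkpoints(&self) -> std::io::Result<Vec<Checkpoint>> { Ok(Vec::new()) }
    fn reset_working_log(&self) -> std::io::Result<()> { Ok(()) }
    fn has_initial_attributions(&self) -> bool { false }
}

fn run(working_log: &WorkingLog, reset: bool, is_pre_commit: bool) -> std::io::Result<(u64, u64, u64)> {
    // Read once; the same data serves the early-exit check and the rest of run().
    let mut checkpoints = working_log.read_all_checkpoints()?;

    // Pre-commit early exit runs against the EXISTING data, before anything is
    // destroyed. (Other conditions such as inter_commit_move are elided here.)
    if is_pre_commit {
        let has_no_ai_edits = checkpoints.iter().all(|cp| cp.entries.is_empty() || !cp.is_ai);
        if has_no_ai_edits && !working_log.has_initial_attributions() {
            return Ok((0, 0, 0)); // nothing AI-authored to record; nothing has been deleted
        }
    }

    // Only now is it safe to reset: the fresh checkpoint is still created below,
    // so reset=true combined with is_pre_commit=true no longer loses attribution data.
    if reset {
        working_log.reset_working_log()?;
        checkpoints.clear();
    }

    // ... proceed to build and append the fresh checkpoint from `checkpoints` ...
    let _ = &checkpoints;
    Ok((0, 0, 0))
}
```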
fix: reduce memory overflow from checkpoint reads and writes (#344)
Summary
Addresses the runaway memory usage (30-60GB) reported in #344 by fixing the highest-impact patterns in checkpoint I/O:
1. Streaming prune-and-rewrite on append (`repo_storage.rs`): `append_checkpoint` previously read ALL checkpoints into memory, appended one, pruned, then wrote ALL back, i.e. O(N) memory per append. Now it streams existing checkpoints one at a time through a `BufReader`, prunes char-level attributions for files superseded by the new checkpoint, writes to a temp file via `BufWriter`, appends the new checkpoint, then atomically renames. Peak memory is one checkpoint plus the new checkpoint, rather than all checkpoints. The file stays small throughout long agent loops because pruning happens on every append (see the sketch after this list).
2. Eliminate redundant full reads (`checkpoint.rs`): A single `checkpoint::run()` call previously triggered 4+ independent `read_all_checkpoints()` deserializations of the entire JSONL file. Now checkpoints are read once at the top of `run()` and passed through to `get_all_tracked_files` via a new `preloaded_checkpoints` parameter.
3. Streaming reads (`repo_storage.rs`): `read_all_checkpoints` now uses `BufReader` line by line instead of `fs::read_to_string`, avoiding holding the full file string and the parsed structs in memory simultaneously.
4. BufWriter for writes (`repo_storage.rs`): `write_all_checkpoints` now streams serialization through `BufWriter` instead of building a full string in memory. An explicit `flush()` call ensures write errors are propagated rather than silently dropped on `BufWriter::drop`.

All 38 checkpoint-related unit tests pass (31 checkpoint tests + 7 repo_storage tests). No new dependencies added.
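A minimal sketch of the streaming prune-and-append pattern from item 1, assuming the JSONL file holds one serde-serialized checkpoint per line; the `Checkpoint` fields and the `append_checkpoint_streaming` name are illustrative, not the crate's actual API:

```rust
use std::fs::{self, File};
use std::io::{BufRead, BufReader, BufWriter, Write};
use std::path::Path;

use serde::{Deserialize, Serialize};

// Simplified stand-in for the real checkpoint record; fields are assumptions.
#[derive(Serialize, Deserialize)]
struct Checkpoint {
    files: Vec<String>,
    char_attributions: Vec<String>,
}

// Streaming prune-and-append: peak memory is one existing checkpoint plus the
// new one, regardless of how long the JSONL file has grown.
fn append_checkpoint_streaming(path: &Path, new_cp: &Checkpoint) -> std::io::Result<()> {
    // "checkpoints.jsonl" -> "checkpoints.jsonl.tmp"
    let tmp_path = path.with_extension("jsonl.tmp");
    let mut writer = BufWriter::new(File::create(&tmp_path)?);

    if path.exists() {
        let reader = BufReader::new(File::open(path)?);
        for line in reader.lines() {
            let line = line?;
            if line.trim().is_empty() {
                continue;
            }
            let mut cp: Checkpoint = serde_json::from_str(&line)
                .map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidData, e))?;
            // Prune char-level attributions for files superseded by the new checkpoint.
            if cp.files.iter().any(|f| new_cp.files.contains(f)) {
                cp.char_attributions.clear();
            }
            serde_json::to_writer(&mut writer, &cp)
                .map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidData, e))?;
            writer.write_all(b"\n")?;
        }
    }

    // Append the new checkpoint last, flush so I/O errors surface, then
    // atomically replace the original file.
    serde_json::to_writer(&mut writer, new_cp)
        .map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidData, e))?;
    writer.write_all(b"\n")?;
    writer.flush()?;
    fs::rename(&tmp_path, path)?;
    Ok(())
}
```

The temp-file-plus-rename at the end is what makes the rewrite atomic: a crash mid-write leaves the original checkpoints.jsonl untouched, at worst leaving a stray .tmp file, as noted in the checklist below.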
Updates since last revision
Major rework: The initial approach deferred char-level attribution pruning to `write_all_checkpoints`, but this left un-pruned data in the file during the intra-commit loop where memory issues are worst. The new approach prunes on every `append_checkpoint` call using a streaming read-modify-write pattern that keeps only one checkpoint in memory at a time. The `prune_old_char_attributions` method was removed entirely (its logic is inlined into `append_checkpoint`).

Other changes:
- `write_all_checkpoints` signature reverted to `&[Checkpoint]` (no longer needs `&mut` since pruning moved to append)
- `writer.flush()?;` in both `append_checkpoint` and `write_all_checkpoints` to ensure I/O errors are propagated (see the sketch below)
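A small illustration of why the explicit flush matters, with an assumed path and a string payload standing in for serialized checkpoints; without the call, `BufWriter` flushes in its `Drop` impl, where any I/O error is silently discarded:

```rust
use std::fs::File;
use std::io::{BufWriter, Write};

// Illustrative only; the real write_all_checkpoints serializes Checkpoint structs.
fn write_all_checkpoints_sketch(lines: &[String]) -> std::io::Result<()> {
    let file = File::create(".git/ai/checkpoints.jsonl")?; // path assumed for the example
    let mut writer = BufWriter::new(file);
    for line in lines {
        writer.write_all(line.as_bytes())?;
        writer.write_all(b"\n")?;
    }
    // Without this call, buffered data is written out on drop and any error
    // there is swallowed; flushing explicitly surfaces the error to the caller.
    writer.flush()?;
    Ok(())
}
```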
Review & Testing Checklist for Human

- `append_checkpoint` assumes the checkpoint being appended is the newest for its files (it clears attributions from older entries with matching files). This should always be true since we append chronologically, but verify no code path appends out-of-order or writes un-pruned checkpoints via `write_all_checkpoints` directly.
- `has_no_ai_edits` logic equivalence: The early-exit check in `checkpoint::run()` was rewritten from `all_ai_touched_files().is_empty()` to `checkpoints.iter().all(|cp| cp.entries.is_empty() || cp.kind != AiAgent/AiTab)`. These should be logically equivalent, but the double negative is easy to get wrong; worth a careful trace through both code paths (see the sketch after this list).
- Verify the memory improvement on a real agent workflow through `git commit`, and that attributions are correctly preserved end-to-end. Unit tests validate correctness but not the memory improvement.
- `append_checkpoint` writes to `checkpoints.jsonl.tmp` then renames. If the process crashes mid-write, the temp file is left behind (harmless, but it clutters `.git/ai/`). Consider whether cleanup is needed.
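A small sketch of the equivalence the second checklist item asks to trace, using simplified stand-in types (the enum variants and field names are assumptions, and "AI-touched files" is modeled as the entries of AI checkpoints):

```rust
enum CheckpointKind { AiAgent, AiTab, Human }

struct Checkpoint { kind: CheckpointKind, entries: Vec<String> }

fn is_ai(kind: &CheckpointKind) -> bool {
    matches!(kind, CheckpointKind::AiAgent | CheckpointKind::AiTab)
}

// Old check: no non-empty AI checkpoint exists (all_ai_touched_files() is empty).
fn old_has_no_ai_edits(checkpoints: &[Checkpoint]) -> bool {
    !checkpoints.iter().any(|cp| is_ai(&cp.kind) && !cp.entries.is_empty())
}

// New check: every checkpoint is either empty or not AI-authored.
fn new_has_no_ai_edits(checkpoints: &[Checkpoint]) -> bool {
    checkpoints.iter().all(|cp| cp.entries.is_empty() || !is_ai(&cp.kind))
}
```

By De Morgan's law, `!any(p)` equals `all(!p)`, and `!(is_ai && !empty)` is `empty || !is_ai`, so the two functions agree on every input, including an empty slice, where both return true; that empty-slice behavior is exactly what the review comment above flags when the reset runs first.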
Notes

- `get_all_tracked_files` gained an optional `preloaded_checkpoints` parameter. Existing callers that don't pass it still work (they read from disk as before).
- `write_all_checkpoints` no longer prunes. If any code path writes un-pruned data via `write_all_checkpoints`, it won't be pruned until the next `append_checkpoint`.

Link to Devin run: https://app.devin.ai/sessions/2a46b6eaa71f4f46913488bef2ff52a1
Requested by: @svarlamov